Minimum Divergence Estimators, Maximum Likelihood and the Generalized Bootstrap
Abstract
This paper states that most commonly used minimum divergence estimators are MLEs for suitably generalized bootstrapped sampling schemes. Optimality, in the sense of Bahadur, of the associated tests of fit under such sampling schemes is also considered.
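As a minimal illustration of the sampling side of this claim, the sketch below maximizes a weighted Gaussian log-likelihood under i.i.d. mean-one exponential weights, one common generalized (weighted) bootstrap scheme. The Gaussian location model, the exponential weight law, and all names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)   # observed sample

def weighted_mle_normal_mean(x, w):
    # argmax over mu of sum_i w_i * log N(x_i; mu, 1): a weighted average.
    return np.sum(w * x) / np.sum(w)

# Each replicate draws i.i.d. mean-one exponential weights and re-solves the
# weighted-likelihood problem; the replicates form the bootstrap distribution.
estimates = [weighted_mle_normal_mean(x, rng.exponential(1.0, x.size))
             for _ in range(1000)]
print("plain MLE:          ", x.mean())
print("bootstrap mean / sd:", np.mean(estimates), np.std(estimates))
```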
Similar Resources
Weighted Sampling, Maximum Likelihood and Minimum Divergence Estimators
This paper explores Maximum Likelihood in parametric models in the context of Sanov type Large Deviation Probabilities. MLE in parametric models under weighted sampling is shown to be associated with the minimization of a specific divergence criterion defined with respect to the distribution of the weights. Some properties of the resulting inferential procedure are presented; Bahadur efficiency...
On the Maximum Likelihood Estimators for some Generalized Pareto-like Frequency Distribution
In this paper we consider a four-parameter, so-called Generalized Pareto-like Frequency Distribution, which has been constructed using a stochastic Birth-Death Process in order to model phenomena arising in Bioinformatics (Astola and Danielian, 2007). As examples, two "real data" sets on the number of proteins and the number of residues are given for analyzing such a distribution. The co...
Generalized Empirical Likelihood Estimators
In an effort to improve the small sample properties of generalized method of moments (GMM) estimators, a number of alternative estimators have been suggested. These include empirical likelihood (EL), continuous updating, and exponential tilting estimators. We show that these estimators share a common structure, being members of a class of generalized empirical likelihood (GEL) estimators. We us...
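As a hedged sketch of one member of the GEL class named above, the continuous-updating estimator, the code below estimates the mean of a N(theta, 1) sample from the two moment conditions E[X - theta] = 0 and E[X^2 - theta^2 - 1] = 0 by plain grid search. The model, moments, and grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=1.5, scale=1.0, size=300)

def cue_objective(theta, x):
    # Stacked moment functions: E[x - theta] = 0 and E[x^2 - theta^2 - 1] = 0.
    g = np.column_stack((x - theta, x**2 - theta**2 - 1.0))
    gbar = g.mean(axis=0)
    W = np.cov(g, rowvar=False)             # centered moment covariance,
    return gbar @ np.linalg.solve(W, gbar)  # re-evaluated at each theta
                                            # ("continuous updating")

thetas = np.linspace(0.0, 3.0, 601)
print("CUE estimate:", thetas[np.argmin([cue_objective(t, x) for t in thetas])])
```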
Generalized Maximum Spacing Estimators
The maximum spacing (MSP) method, introduced by Cheng and Amin (1983) and independently by Ranneby (1984), is a general method for estimating parameters in univariate continuous distributions and is known to give consistent and asymptotically efficient estimates under general conditions. This method can be derived from an approximation based on simple spacings of the Kullback-Leibler informat...
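Since the MSP objective is concrete, a short sketch may help: pick the parameter maximizing the mean log spacing of consecutive model-CDF values at the ordered sample, padded with the endpoints 0 and 1. The exponential model and the grid search below are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.exponential(scale=2.0, size=100))   # true rate = 0.5

def msp_objective(rate, x):
    # Mean log spacing of consecutive model-CDF values at the ordered sample,
    # padded with F(-inf) = 0 and F(+inf) = 1.
    cdf = 1.0 - np.exp(-rate * x)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    return np.mean(np.log(np.maximum(spacings, 1e-300)))  # guard log(0) on ties

rates = np.linspace(0.05, 2.0, 400)
best = rates[np.argmax([msp_objective(r, x) for r in rates])]
print("MSP estimate of the exponential rate:", best)
```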
The Convergence of Lossy Maximum Likelihood Estimators
Given a sequence of observations $(X_n)_{n \ge 1}$ and a family of probability distributions $\{Q_\theta\}_{\theta \in \Theta}$, the lossy likelihood of a particular distribution $Q_\theta$ given the data $X_1^n := (X_1, X_2, \ldots, X_n)$ is defined as $Q_\theta(B(X_1^n, D))$, where $B(X_1^n, D)$ is the distortion ball of radius $D$ around the source sequence $X_1^n$. Here we investigate the convergence of maximizers of the lossy likelihood.
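A rough Monte Carlo sketch of a single lossy-likelihood evaluation, under assumed specifics (Gaussian $Q_\theta$, per-symbol squared-error distortion, arbitrary constants): estimate $Q_\theta(B(X_1^n, D))$ as the fraction of fresh $Q_\theta$ draws falling within distortion $D$ of the observed sequence.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.5, 1.0, size=20)       # observed source sequence x_1^n

def lossy_likelihood(theta, x, D, reps=20000):
    # Fraction of fresh Q_theta sequences within per-symbol squared-error
    # distortion D of the observed one: a Monte Carlo estimate of
    # Q_theta(B(x_1^n, D)).
    y = rng.normal(theta, 1.0, size=(reps, x.size))
    distortion = np.mean((y - x) ** 2, axis=1)
    return np.mean(distortion <= D)

for theta in (0.0, 0.5, 1.0):
    print(theta, lossy_likelihood(theta, x, D=2.5))
```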
Journal
Title: Entropy
Year: 2021
ISSN: 1099-4300
DOI: https://doi.org/10.3390/e23020185